Results 1 - 20 of 103
1.
Acta Academiae Medicinae Sinicae ; (6): 273-279, 2023.
Article in Chinese | WPRIM | ID: wpr-981263

ABSTRACT

Objective To evaluate the accuracy of different convolutional neural networks (CNNs), representative deep learning models, in the differential diagnosis of ameloblastoma and odontogenic keratocyst, and to compare the diagnostic results of the models with those of oral radiologists. Methods A total of 1000 digital panoramic radiographs were retrospectively collected from patients with ameloblastoma (500 radiographs) or odontogenic keratocyst (500 radiographs) in the Department of Oral and Maxillofacial Radiology, Peking University School of Stomatology. Eight CNNs, including ResNet (18, 50, 101), VGG (16, 19), and EfficientNet (b1, b3, b5), were selected to distinguish ameloblastoma from odontogenic keratocyst. Transfer learning was employed to train on the 800 panoramic radiographs in the training set through 5-fold cross-validation, and the 200 panoramic radiographs in the test set were used for differential diagnosis. The Chi-square test was performed to compare performance among the CNNs. Furthermore, 7 oral radiologists (2 seniors and 5 juniors) made diagnoses on the 200 panoramic radiographs in the test set, and the diagnostic results were compared between the CNNs and the oral radiologists. Results The eight models showed diagnostic accuracy ranging from 82.50% to 87.50%, of which EfficientNet b1 had the highest accuracy of 87.50%. There was no significant difference in diagnostic accuracy among the CNN models (P=0.998, P=0.905). The average diagnostic accuracy of the oral radiologists was (70.30±5.48)%, with no statistical difference in accuracy between senior and junior radiologists (P=0.883). The diagnostic accuracy of the CNN models was higher than that of the oral radiologists (P<0.001). Conclusion Deep learning CNNs can achieve accurate differential diagnosis between ameloblastoma and odontogenic keratocyst on panoramic radiographs, with higher diagnostic accuracy than oral radiologists.
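The training protocol above (800 radiographs, 5-fold cross-validation) rests on a standard index-splitting scheme, which can be sketched as follows; only the sample count and fold count come from the abstract, and the function name and seed are illustrative (model fitting itself is omitted):

```python
import random

def five_fold_splits(n_samples, n_folds=5, seed=0):
    """Partition sample indices into n_folds (train, validation) splits."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::n_folds] for i in range(n_folds)]  # near-equal interleaved folds
    splits = []
    for k in range(n_folds):
        val = folds[k]
        train = [i for f in folds if f is not val for i in f]
        splits.append((train, val))
    return splits

# 800 training radiographs split 5 ways, as in the study design
splits = five_fold_splits(800)
```

Each of the 5 splits holds out 160 radiographs for validation and trains on the remaining 640, so every image is validated exactly once.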


Subject(s)
Humans , Ameloblastoma/diagnostic imaging , Deep Learning , Diagnosis, Differential , Radiography, Panoramic , Retrospective Studies , Odontogenic Cysts/diagnostic imaging , Odontogenic Tumors
2.
Journal of Clinical Otorhinolaryngology Head and Neck Surgery ; (12): 483-486, 2023.
Article in Chinese | WPRIM | ID: wpr-982772

ABSTRACT

Objective:To evaluate the diagnostic accuracy of a convolutional neural network (CNN) in diagnosing nasopharyngeal carcinoma using endoscopic narrowband imaging. Methods:A total of 834 cases with nasopharyngeal lesions were collected from the People's Hospital of Guangxi Zhuang Autonomous Region between 2014 and 2016. We trained the DenseNet201 model to classify the endoscopic images, evaluated its performance on the test dataset, and compared the results with those of two independent endoscopic experts. Results:The area under the ROC curve of the CNN in diagnosing nasopharyngeal carcinoma was 0.98. The sensitivity and specificity of the CNN were 91.90% and 94.69%, respectively. The sensitivities of the two experts' assessments were 92.08% and 91.06%, and their specificities were 95.58% and 92.79%, respectively. There was no significant difference between the diagnostic accuracy of the CNN and that of the expert assessments (P=0.282, P=0.085). Moreover, there was no significant difference in accuracy in discriminating early-stage from late-stage nasopharyngeal carcinoma (P=0.382). The CNN model could rapidly distinguish nasopharyngeal carcinoma from benign lesions, with an image recognition time of 0.1 s per image. Conclusion:The CNN model can quickly distinguish nasopharyngeal carcinoma from benign nasopharyngeal lesions, which can aid endoscopists in diagnosing nasopharyngeal lesions and reduce the rate of nasopharyngeal biopsy.
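The sensitivity and specificity figures reported above are simple functions of confusion-matrix counts; a minimal sketch of those definitions (the counts below are hypothetical illustrations, not the study's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical confusion-matrix counts, for illustration only
sens, spec = sensitivity_specificity(tp=91, fn=9, tn=94, fp=6)
```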


Subject(s)
Humans , Nasopharyngeal Carcinoma , Narrow Band Imaging , China , Neural Networks, Computer , Nasopharyngeal Neoplasms/diagnostic imaging
3.
Journal of Biomedical Engineering ; (6): 458-464, 2023.
Article in Chinese | WPRIM | ID: wpr-981563

ABSTRACT

Sleep staging is the basis for solving sleep problems. There is an upper limit to the classification accuracy achievable by sleep staging models based on single-channel electroencephalogram (EEG) data and features. To address this problem, this paper proposed an automatic sleep staging model that combines a deep convolutional neural network (DCNN) and a bi-directional long short-term memory network (BiLSTM). The model used the DCNN to automatically learn the time-frequency domain features of EEG signals, and used the BiLSTM to extract the temporal features between data points, fully exploiting the feature information contained in the data to improve the accuracy of automatic sleep staging. At the same time, noise reduction techniques and adaptive synthetic sampling were used to reduce the impact of signal noise and unbalanced datasets on model performance. Experiments were conducted using the Sleep-EDF (European Data Format) Database Expanded and the Shanghai Mental Health Center Sleep Database, achieving overall accuracies of 86.9% and 88.9%, respectively. The proposed model outperformed the baseline networks in all experiments, further demonstrating its validity, and it can provide a reference for the construction of home sleep monitoring systems based on single-channel EEG signals.
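The adaptive synthetic sampling step used above to rebalance the sleep-stage classes generates new minority-class samples by interpolating between existing ones; a minimal SMOTE/ADASYN-style sketch of that idea (the feature matrix and function name are placeholders, not the paper's data or code):

```python
import numpy as np

def interpolate_minority(X, n_new, seed=0):
    """Generate n_new synthetic minority-class samples by linear
    interpolation between random pairs of existing samples."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(X), n_new)
    j = rng.integers(0, len(X), n_new)
    lam = rng.random((n_new, 1))        # interpolation weights in [0, 1)
    return X[i] + lam * (X[j] - X[i])   # points on the segments between pairs

X_min = np.random.default_rng(1).normal(size=(30, 8))  # placeholder minority features
X_syn = interpolate_minority(X_min, n_new=70)
```

Because each synthetic point is a convex combination of two real samples, it stays within the per-feature range of the original minority class.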


Subject(s)
China , Sleep Stages , Sleep , Electroencephalography , Databases, Factual
4.
Journal of Biomedical Engineering ; (6): 257-264, 2023.
Article in Chinese | WPRIM | ID: wpr-981537

ABSTRACT

The macaque is a common animal model in drug safety assessment. Its behavior reflects its health condition before and after drug administration, which can effectively reveal the side effects of drugs. At present, researchers usually rely on manual observation of macaque behavior, which cannot achieve uninterrupted 24-hour monitoring. Therefore, it is urgent to develop a system for 24-hour observation and recognition of macaque behavior. To solve this problem, this paper constructs a video dataset containing nine kinds of macaque behaviors (MBVD-9) and proposes a network called Transformer-augmented SlowFast for macaque behavior recognition (TAS-MBR) based on this dataset. Specifically, building on the SlowFast network, the TAS-MBR network converts the red, green and blue (RGB) frames input to its fast branch into residual frames and introduces a Transformer module after the convolution operation to capture motion information more effectively. The results show that the average classification accuracy of the TAS-MBR network for macaque behavior is 94.53%, a significant improvement over the original SlowFast network, proving the effectiveness and superiority of the proposed method in macaque behavior recognition. This work provides a new approach to the continuous observation and recognition of macaque behavior and lays the technical foundation for quantifying monkey behavior before and after medication in drug safety evaluation.
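The residual-frame transform applied on the fast pathway is, at its core, a difference of consecutive RGB frames, which suppresses static background and emphasizes motion; a minimal sketch with a synthetic placeholder clip (this illustrates only the frame arithmetic, not the TAS-MBR network itself):

```python
import numpy as np

def residual_frames(clip):
    """clip: (T, H, W, 3) video array -> (T-1, H, W, 3) frame differences,
    which suppress static background and keep motion."""
    clip = clip.astype(np.float32)
    return clip[1:] - clip[:-1]

clip = np.zeros((8, 4, 4, 3), dtype=np.uint8)
clip[4:] = 255  # an abrupt change between frames 3 and 4
res = residual_frames(clip)
```

In this toy clip only the one transition between frames carries a nonzero residual; every static stretch maps to zeros.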


Subject(s)
Animals , Electric Power Supplies , Macaca , Recognition, Psychology
5.
Journal of Biomedical Engineering ; (6): 51-59, 2023.
Article in Chinese | WPRIM | ID: wpr-970673

ABSTRACT

Fetal electrocardiogram (ECG) signals provide important clinical information for early diagnosis and intervention of fetal abnormalities. In this paper, we propose a new method for fetal ECG signal extraction and analysis. Firstly, an improved fast independent component analysis method and a singular value decomposition algorithm are combined to extract high-quality fetal ECG signals and solve the waveform-missing problem. Secondly, a novel convolutional neural network model is applied to identify the QRS complexes of fetal ECG signals and effectively solve the waveform-overlap problem. Together, these steps achieve high-quality extraction of fetal ECG signals and intelligent recognition of fetal QRS complexes. The proposed method was validated with data from the PhysioNet (Research Resource for Complex Physiologic Signals) Computing in Cardiology Challenge 2013 database. The results show that the average sensitivity and positive predictive value of the extraction algorithm are 98.21% and 99.52%, respectively, and the average sensitivity and positive predictive value of the QRS complex recognition algorithm are 94.14% and 95.80%, respectively, outperforming previously reported results. In conclusion, the algorithm and model proposed in this paper have practical significance and may provide a theoretical basis for clinical medical decision-making in the future.
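The singular value decomposition step in such pipelines can be viewed as a low-rank approximation of a matrix of stacked signal segments, keeping the dominant components and discarding the rest; a minimal numpy sketch under that interpretation (the matrix here is a random placeholder, not ECG data, and this does not reproduce the paper's maternal/fetal separation):

```python
import numpy as np

def low_rank(A, r):
    """Best rank-r approximation of A via truncated SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 6)) @ rng.normal(size=(6, 100))  # exactly rank-6 matrix
A6 = low_rank(A, 6)   # recovers A
A3 = low_rank(A, 3)   # discards the three weakest components
```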


Subject(s)
Algorithms , Neural Networks, Computer , Electrocardiography , Databases, Factual , Fetus
6.
China Journal of Chinese Materia Medica ; (24): 829-834, 2023.
Article in Chinese | WPRIM | ID: wpr-970553

ABSTRACT

In the digital transformation of the Chinese pharmaceutical industry, how to efficiently govern and analyze industrial data and mine the valuable information contained therein to guide the production of drug products has always been a research hotspot and an application difficulty. In general, Chinese pharmaceutical manufacturing techniques are relatively extensive, and the consistency of drug quality needs to be improved. To address this problem, we proposed an optimization method combining advanced computational tools (e.g., Bayesian networks, convolutional neural networks, and Pareto multi-objective optimization algorithms) with lean six sigma tools (e.g., the Shewhart control chart and the process performance index) to dig deeply into historical industrial data and guide the continuous improvement of pharmaceutical processes. Further, we employed this strategy to optimize the manufacturing process of sporoderm-removed Ganoderma lucidum spore powder. After optimization, we preliminarily obtained the possible interval combinations of critical parameters that ensure the P_pk values of the critical quality attributes, including moisture, fineness, crude polysaccharide, and total triterpenes of the sporoderm-removed G. lucidum spore powder, to be no less than 1.33. The results indicate that the proposed strategy has industrial application value.
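The process performance index P_pk used as the target above compares the distance from the process mean to the nearer specification limit against three standard deviations; a minimal sketch of that formula (the specification limits and process statistics below are hypothetical, not the study's values):

```python
def ppk(mean, std, lsl, usl):
    """Process performance index: min(USL - mean, mean - LSL) / (3 * sigma)."""
    return min(usl - mean, mean - lsl) / (3 * std)

# hypothetical moisture spec (5-9%) and observed process statistics
value = ppk(mean=7.0, std=0.4, lsl=5.0, usl=9.0)
```

A P_pk of at least 1.33 (as targeted in the study) means the nearer specification limit is at least four standard deviations from the mean.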


Subject(s)
Bayes Theorem , Data Mining , Drug Industry , Powders , Reishi , Spores, Fungal
7.
Chinese Journal of Schistosomiasis Control ; (6): 121-127, 2023.
Article in Chinese | WPRIM | ID: wpr-973695

ABSTRACT

Objective To develop an intelligent recognition model based on deep learning algorithms of unmanned aerial vehicle (UAV) images, and to preliminarily explore the value of this model for remote identification, monitoring and management of cattle, a source of Schistosoma japonicum infection. Methods Oncomelania hupensis snail-infested marshlands around the Poyang Lake area were selected as the study area. Image datasets of the study area were captured by aerial photography with UAV and subjected to augmentation. Cattle in the sample database were annotated with the annotation software VGG Image Annotator to create the morphological recognition labels for cattle. A model was created for intelligent recognition of livestock based on deep learning-based Mask R-convolutional neural network (CNN) algorithms. The performance of the model for cattle recognition was evaluated with accuracy, precision, recall, F1 score and mean precision. Results A total of 200 original UAV images were obtained, and 410 images were yielded following data augmentation. A total of 2 860 training samples of cattle recognition were labeled. The created deep learning-based Mask R-CNN model converged following 200 iterations, with an accuracy of 88.01%, precision of 92.33%, recall of 94.06%, F1 score of 93.19%, and mean precision of 92.27%, and the model was effective to detect and segment the morphological features of cattle. Conclusion The deep learning-based Mask R-CNN model is highly accurate for recognition of cattle based on UAV images, which is feasible for remote intelligent recognition, monitoring, and management of the source of S. japonicum infection.
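The precision, recall, and F1 score used above to evaluate the Mask R-CNN cattle detector follow the standard definitions; a minimal sketch with hypothetical detection counts (not the study's data):

```python
def detection_metrics(tp, fp, fn):
    """Precision = TP/(TP+FP), recall = TP/(TP+FN), F1 = their harmonic mean."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# hypothetical cattle-detection counts, for illustration only
p, r, f1 = detection_metrics(tp=80, fp=20, fn=20)
```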

8.
Journal of Sun Yat-sen University(Medical Sciences) ; (6): 430-438, 2023.
Article in Chinese | WPRIM | ID: wpr-973239

ABSTRACT

Objective Artificial intelligence (AI)-based automated whole-smear diatom detection can perform forensic diatom testing for drowning more quickly and efficiently than human experts. However, this technique has only been used in conjunction with the strong acid digestion method, which has a low diatom extraction rate. In this study, we propose using the more efficient proteinase K tissue digestion method (hereinafter referred to as the enzyme digestion method) for diatom extraction, to investigate the generalization ability and feasibility of this technique with other extraction methods. Methods Lung tissues from 6 drowned cadavers were collected, digested with proteinase K, and made into smears; the smears were digitized using a digital image matrix cutting method, and a diatom and background database was established accordingly. The dataset was divided into training, validation, and test sets in a 3:1:1 ratio, and convolutional neural network (CNN) models were trained, internally validated, and externally tested on the basis of ImageNet pre-training. Results The accuracy of the best model in external testing was 97.65%, and the regions from which the model extracted features were the regions where the diatoms were located. In practice, the best CNN model achieved a precision of more than 80% for diatom detection in drowned corpses. Conclusion The AI automated diatom detection technique based on a CNN model combined with the enzyme digestion method can efficiently identify diatoms and can serve as an auxiliary method for diatom detection in drowning identification.
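The 3:1:1 train/validation/test partition described above can be sketched as a shuffle-and-slice over item identifiers; the total count and seed below are placeholders, not the study's dataset:

```python
import random

def split_311(items, seed=0):
    """Shuffle and split items into train/validation/test sets in a 3:1:1 ratio."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items) // 5
    return items[: 3 * n], items[3 * n : 4 * n], items[4 * n :]

# placeholder image identifiers; the real set would be smear image tiles
train, val, test = split_311(range(1000))
```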

9.
Chinese Journal of Digestive Endoscopy ; (12): 189-195, 2023.
Article in Chinese | WPRIM | ID: wpr-995373

ABSTRACT

Objective:To evaluate artificial intelligence constructed by a deep convolutional neural network (DCNN) for site identification in upper gastrointestinal endoscopy. Methods:A total of 21 310 images of esophagogastroduodenoscopy from the Cancer Hospital of the Chinese Academy of Medical Sciences from January 2019 to June 2021 were collected. Of these, 19 191 images were used to construct the site identification models, and the remaining 2 119 images were used for verification. The performance differences of two models constructed by DCNN in the identification of 30 sites of the upper digestive tract were compared. One was the traditional ResNetV2 model constructed with Inception-ResNetV2 (ResNetV2); the other was a hybrid neural network model, RESENet, constructed with Inception-ResNetV2 and Squeeze-and-Excitation Networks (RESENet). The main indices were accuracy, sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV). Results:The accuracy, sensitivity, specificity, PPV and NPV of the ResNetV2 model in the identification of the 30 sites of the upper digestive tract were 94.62%-99.10%, 30.61%-100.00%, 96.07%-99.56%, 42.26%-86.44% and 97.13%-99.75%, respectively. The corresponding values of the RESENet model were 98.08%-99.95%, 92.86%-100.00%, 98.51%-100.00%, 74.51%-100.00% and 98.85%-100.00%, respectively. The mean accuracy, mean sensitivity, mean specificity, mean PPV and mean NPV of the ResNetV2 model were 97.60%, 75.58%, 98.75%, 63.44% and 98.76%, respectively. The corresponding values of the RESENet model were 99.34% (P<0.001), 99.57% (P<0.001), 99.66% (P<0.001), 90.20% (P<0.001) and 99.66% (P<0.001). Conclusion:Compared with the traditional ResNetV2 model, the artificial intelligence-assisted site identification model constructed with the hybrid neural network RESENet shows significantly improved performance.
This model can be used to monitor the integrity of esophagogastroduodenoscopic procedures and is expected to become an important assistant for standardizing procedures and improving their quality, as well as a significant tool for quality control of esophagogastroduodenoscopy.
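The Squeeze-and-Excitation module that distinguishes the hybrid model from plain Inception-ResNetV2 recalibrates feature channels via a squeeze (global average pooling) followed by an excitation (two small fully connected layers and a sigmoid); a minimal numpy sketch of that mechanism with random placeholder weights (not the trained model):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w1, w2):
    """Squeeze-and-Excitation: squeeze (H, W, C) features to per-channel
    means, pass them through two small FC layers, and rescale each channel."""
    z = x.mean(axis=(0, 1))                    # squeeze: (C,)
    s = sigmoid(np.maximum(z @ w1, 0.0) @ w2)  # excitation: ReLU then sigmoid, (C,)
    return x * s                               # channel-wise recalibration

rng = np.random.default_rng(0)
C, r = 16, 4                                   # channels and reduction ratio
x = rng.normal(size=(8, 8, C))                 # placeholder feature map
y = se_block(x, rng.normal(size=(C, C // r)), rng.normal(size=(C // r, C)))
```

Because the sigmoid gate lies in (0, 1), the block can only attenuate channels, learning which ones to emphasize relative to the others.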

10.
Journal of Prevention and Treatment for Stomatological Diseases ; (12): 603-608, 2023.
Article in Chinese | WPRIM | ID: wpr-972255

ABSTRACT

Facial symmetry evaluation has always been a hot topic for doctors engaged in the study of facial esthetics, such as those in orthodontics, dentistry, and plastic surgery. Although scholars at home and abroad have carried out much research on the evaluation of facial symmetry with a variety of emerging technologies and methods, there is still a lack of unified standards for the evaluation of facial asymmetry due to the complexity of the content and methods and individual subjectivity. Facial asymmetry involves changes in the length, width and height of the face. It is a complex dental and maxillofacial malformation whose early identification and accurate evaluation are particularly important. Clinically, in addition to the necessary dental and maxillofacial examinations, it is also necessary to evaluate facial asymmetry with the help of corresponding auxiliary methods. This paper summarizes the commonly used three-dimensional evaluation methods. The evaluation methods of facial asymmetry can be divided into 5 categories: qualitative analysis, quantitative analysis, dynamic analysis, mathematical analysis, and artificial intelligence analysis. After analyzing and summarizing the characteristics, advantages and limitations of each method in clinical applications, it is found that although these methods vary in accuracy, evaluation scope, diagnostic nature and calculation method, the three-dimensional evaluation methods are more objective, accurate and convenient, and will become the mainstream evaluation methods for facial asymmetry with the further development of three-dimensional measurement technologies.

11.
São Paulo med. j ; 140(6): 837-845, Nov.-Dec. 2022. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1410230

ABSTRACT

BACKGROUND: Artificial intelligence (AI) deals with the development of algorithms that seek to perceive their environment and perform actions that maximize their chance of successfully reaching predetermined goals. OBJECTIVE: To provide an overview of the basic principles of AI and its main studies in the fields of glaucoma, retinopathy of prematurity, age-related macular degeneration and diabetic retinopathy. From this perspective, the limitations and potential challenges that have accompanied the implementation and development of this new technology within ophthalmology are presented. DESIGN AND SETTING: Narrative review developed by a research group at the Universidade Federal de São Paulo (UNIFESP), São Paulo (SP), Brazil. METHODS: We searched the literature on the main applications of AI within ophthalmology, using the keywords "artificial intelligence", "diabetic retinopathy", "macular degeneration age-related", "glaucoma" and "retinopathy of prematurity," covering the period from January 1, 2007, to May 3, 2021. We used the MEDLINE database (via PubMed) and the LILACS database (via Virtual Health Library) to identify relevant articles. RESULTS: We retrieved 457 references, of which 47 were considered eligible for intensive review and critical analysis. CONCLUSION: Use of technology, as embodied in AI algorithms, is a way of providing an increasingly accurate service and enhancing scientific research. It forms a source of complement and innovation in relation to the daily skills of ophthalmologists. Thus, AI adds technology to human expertise.

12.
Rev. mex. ing. bioméd ; 43(3): 1280, Sep.-Dec. 2022. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1450143

ABSTRACT

Segmentation is vital in Optical Coherence Tomography Angiography (OCT-A) images. The separation and distinction of the different parts that build the macula simplify the subsequent detection of observable patterns/illnesses in the retina. In this work, we carried out multi-class image segmentation in which the best characteristics are highlighted in the appropriate plexuses by comparing different neural network architectures, including U-Net, ResU-Net, and FCN. We focus on two critical zones: the retinal vasculature (RV) and the foveal avascular zone (FAZ). The precision obtained for RV and FAZ segmentation over 316 OCT-A images from the OCT-A 500 database was 93.21% and 92.59%, respectively, and the FAZ was segmented with an accuracy of 99.83% for binary classification.



13.
Article | IMSEAR | ID: sea-218735

ABSTRACT

In this digital era, face recognition systems play a vital role in almost every sector. Face recognition is one of the most widely implemented biometrics across different fields. Classroom attendance checking is a contributing factor to student participation and final success in courses. Every institute follows its own way of taking attendance: some take attendance manually using paper or a register file, while others use various biometric techniques. Taking attendance by calling out names or passing around an attendance sheet is time-consuming, and the latter is open to easy fraud. In this paper, a comparative analysis of various existing approaches to attendance management systems based on facial recognition, as used to monitor attendance in various institutions alongside fingerprint, GPS, RFID, etc., is discussed along with their limitations.

14.
Journal of Environmental and Occupational Medicine ; (12): 41-46, 2022.
Article in Chinese | WPRIM | ID: wpr-960368

ABSTRACT

Background Diagnosis of pneumoconiosis by radiologists reading chest X-ray images is affected by many factors and is prone to misdiagnosis or missed diagnosis. With the rapid development of artificial intelligence in the field of medical imaging, whether artificial intelligence can be used to read images of pneumoconiosis deserves consideration. Objective Three deep learning models for identifying the presence of pneumoconiosis were constructed based on deep convolutional neural networks, and an optimal model was selected by comparing the diagnostic efficiency of the three models. Methods Digital radiography (DR) chest images were collected between June 2017 and December 2020 from 7 hospitals, and a standard radiograph quality control protocol was followed. The DR chest images with positive results were classified into the positive group, while those without pneumoconiosis were classified into the negative group. The collected chest radiographs were labeled by experts who had passed a radiograph-reading assessment, and the experts were continually assessed for labeling consistency based on an expectation-maximization algorithm. The labeled data were cleaned, archived, and preprocessed, and then grouped into a training set and a validation set. Three deep convolutional neural network models, TMNet, ResNet-50, and ResNeXt-50, were constructed and trained by the ten-fold cross-validation method to obtain an optimal model. Five hundred DR chest radiographs that were not included in the training and validation sets were collected and identified by five senior experts as the gold standard, forming the test set. The accuracy, sensitivity, specificity, area under the curve (AUC), and other indices of the three models were derived after testing, and the efficiency of the three models was evaluated and compared.
Results A total of 24 867 DR chest radiographs were collected for the training and validation sets, including 6 978 images in the positive group and 17 889 in the negative group. There were 312 cases of pulmonary abnormalities such as pneumothorax and pulmonary tuberculosis. A total of nine experts labeled the chest radiographs; the labeling consistency rate for pneumoconiosis (without staging) was above 88%, and the labeling consistency rate for pneumoconiosis staging ranged from 84.68% to 93.66%. The diagnostic accuracy, sensitivity, specificity, and AUC of TMNet were 95.20%, 99.66%, 88.61%, and 0.987, respectively. The corresponding indices of ResNeXt-50 were 87.00%, 89.93%, 82.67%, and 0.911, and those of ResNet-50 were 84.00%, 85.91%, 81.19%, and 0.912. All these indices of TMNet were higher than those of the ResNeXt-50 and ResNet-50 models. The AUC differences between TMNet and the other two models were both statistically significant (P<0.001). Conclusion All three convolutional neural network models can effectively diagnose the presence of pneumoconiosis, among which TMNet provides the best efficiency.
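The AUC values compared above can be computed directly from predicted scores as the probability that a randomly chosen positive case outranks a randomly chosen negative one (the Mann-Whitney formulation); a minimal sketch with hypothetical scores (not the study's predictions):

```python
def auc(pos_scores, neg_scores):
    """AUC as the Mann-Whitney statistic: the probability that a random
    positive scores above a random negative (ties count as 0.5)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# hypothetical model scores for positive and negative radiographs
a = auc([0.9, 0.8, 0.7, 0.3], [0.6, 0.4, 0.2, 0.1])
```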

15.
Journal of Biomedical Engineering ; (6): 1065-1073, 2022.
Article in Chinese | WPRIM | ID: wpr-970643

ABSTRACT

The effective classification of multi-task motor imagery electroencephalogram (EEG) signals is helpful for achieving accurate multi-dimensional human-computer interaction, and exploiting the high frequency-domain specificity between subjects can improve classification accuracy and robustness. Therefore, this paper proposed a multi-task EEG signal classification method based on adaptive time-frequency common spatial patterns (CSP) combined with a convolutional neural network (CNN). The characteristics of each subject's personalized rhythm were extracted by adaptive spectrum awareness, spatial characteristics were calculated using one-versus-rest CSP, and composite time-domain characteristics were then derived to construct spatial-temporal-frequency multi-level fusion features. Finally, the CNN was used to perform high-precision and high-robustness four-task classification. The algorithm was verified on a self-collected dataset containing 10 subjects (33 ± 3 years old, inexperienced) and on dataset 2a of the 4th Brain-Computer Interface Competition (BCI Competition IV-2a). The average accuracy of the proposed algorithm for the four-task classification reached 93.96% and 84.04%, respectively. Compared with other advanced algorithms, the average classification accuracy of the proposed algorithm was significantly improved, and the accuracy range error between subjects was significantly reduced on the public dataset. The results show that the proposed algorithm performs well in multi-task classification and can effectively improve classification accuracy and robustness.
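The one-versus-rest scheme used above turns the four-class motor imagery problem into four binary problems, one per class; the label handling can be sketched as follows (a generic sketch with hypothetical trial labels, not the paper's pipeline):

```python
def one_versus_rest_labels(labels, n_classes):
    """Turn K-class labels into K binary label lists: task k marks
    class k as 1 and all other classes as 0."""
    return [[1 if y == k else 0 for y in labels] for k in range(n_classes)]

# four motor-imagery tasks, hypothetical trial labels
binary_tasks = one_versus_rest_labels([0, 1, 2, 3, 1, 0], n_classes=4)
```

A separate CSP filter set is then fit for each binary task, and the per-task features are concatenated before classification.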


Subject(s)
Humans , Adult , Imagination , Neural Networks, Computer , Imagery, Psychotherapy/methods , Electroencephalography/methods , Algorithms , Brain-Computer Interfaces , Signal Processing, Computer-Assisted
16.
Journal of Forensic Medicine ; (6): 223-230, 2022.
Article in English | WPRIM | ID: wpr-984113

ABSTRACT

OBJECTIVES: To apply the convolutional neural network (CNN) Inception_v3 model to the automatic identification of acceleration and deceleration injury based on brain CT images, and to explore the application prospects of deep learning technology in forensic inference of brain injury mechanisms. METHODS: CT images from 190 cases with acceleration or deceleration brain injury were selected as the experimental group, and CT images from 130 normal brains were used as the control group. The above 320 imaging data were divided into a training-validation dataset and a testing dataset by random sampling. The model's classification performance was evaluated by accuracy, precision, recall, F1-score and AUC. RESULTS: In the training and validation processes, the accuracy of the model in classifying acceleration injury, deceleration injury and normal brain was 99.00% and 87.21%, respectively, which met the requirements. When the optimized model was applied to the testing dataset, its accuracy on the test set was 87.18%; the precision, recall, F1-score and AUC of the model were 84.38%, 90.00%, 87.10% and 0.98, respectively, for recognizing acceleration injury; 86.67%, 72.22%, 78.79% and 0.92, respectively, for recognizing deceleration injury; and 88.57%, 89.86%, 89.21% and 0.93, respectively, for recognizing normal brain. CONCLUSIONS: The Inception_v3 model has potential application value in distinguishing acceleration and deceleration injury based on brain CT images, and is expected to become an auxiliary tool for inferring the mechanism of head injury.


Subject(s)
Humans , Brain/diagnostic imaging , Brain Injuries , Deep Learning , Neural Networks, Computer
17.
Journal of Forensic Medicine ; (6): 31-39, 2022.
Article in English | WPRIM | ID: wpr-984092

ABSTRACT

OBJECTIVES: To select four algorithms with relatively balanced complexity and accuracy among deep learning image classification algorithms for automatic diatom recognition, and to explore the most suitable classification algorithm for diatom recognition, providing data references for research on automatic diatom testing in forensic medicine. METHODS: A small-sample-size dataset (20 000 images) of "diatom" and "background" images from digestive-fluid smears of lung tissue from corpses recovered from water was built to train, validate and test four convolutional neural network (CNN) models: VGG16, ResNet50, InceptionV3 and Inception-ResNet-V2. Receiver operating characteristic (ROC) curves and confusion matrices were drawn; recall, precision, specificity, accuracy and F1-score were calculated; and the performance of each model was systematically evaluated. RESULTS: The InceptionV3 model achieved much better results than the other three models, with a balanced recall of 89.80% and a precision of 92.58%. VGG16 and Inception-ResNet-V2 had similar diatom recognition performance; although their recall and precision could not be balanced, their recognition ability was acceptable. ResNet50 had the lowest diatom recognition performance, with a recall of 55.35%. In terms of feature extraction, all four models extracted features of both diatoms and background, mainly focusing on the diatom region as the primary identification basis. CONCLUSIONS: The Inception-based models showed stronger directivity and targeting in diatom feature extraction. InceptionV3 achieved the best performance in diatom identification and feature extraction compared with the other three models and is the most suitable for routine forensic diatom examination.


Subject(s)
Humans , Algorithms , Deep Learning , Diatoms , Neural Networks, Computer , ROC Curve
18.
Int. j. morphol ; 40(2): 407-413, 2022. ilus
Article in English | LILACS | ID: biblio-1385603

ABSTRACT

SUMMARY: This study aims to automatically extract teeth and alveolar bone structures from CBCT images, a key step in CBCT image analysis in the field of stomatology. In this study, semantic segmentation was used for automatic segmentation. CBCT images marked with five classes were input for U-Net neural network training. Tooth hard tissue (including enamel, dentin, and cementum), the dental pulp cavity, cortical bone, cancellous bone, and other tissues were marked manually for each class. The output data corresponded to the different regions of interest. The network configuration and training parameters were optimized and adjusted according to the prediction results. This method can be used to segment teeth and peripheral bone structures in CBCT. The automatic segmentation process took less than 13 min per CBCT scan. The Dice coefficient against the evaluation reference image was 98%. The U-Net model combined with the watershed method can effectively segment the teeth, pulp cavity, and cortical bone in CBCT images and can provide morphological information for clinical treatment.




Subject(s)
Humans , Tooth/diagnostic imaging , Dental Pulp/diagnostic imaging , Cone-Beam Computed Tomography , Tooth/anatomy & histology , Artificial Intelligence , Dental Pulp/anatomy & histology , Nerve Net
19.
Journal of Biomedical Engineering ; (6): 561-569, 2022.
Article in Chinese | WPRIM | ID: wpr-939624

ABSTRACT

Blood-velocity inversion based on the magnetoelectric effect could support daily monitoring of vascular stenosis, but its inversion accuracy and imaging resolution still need improvement. This paper therefore proposes a convolutional neural network (CNN) based inversion imaging method for intravascular blood-flow velocity. First, an unsupervised-learning CNN is constructed to extract weight-matrix representation information and preprocess the voltage data. The preprocessed results are then input to a supervised-learning CNN, which outputs blood-flow velocity values through nonlinear mapping. Finally, angiographic images are obtained. The validity of the proposed method is verified on a constructed data set. The results show that the correlation coefficients of blood-velocity inversion in the vessel-location and stenosis tests are 0.8844 and 0.9721, respectively. These results indicate that the proposed method effectively reduces information loss during inversion and improves inversion accuracy and imaging resolution, and is expected to assist clinical diagnosis.
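The correlation coefficients quoted above (0.8844 and 0.9721) score the linear agreement between inverted and reference velocities. A minimal sketch of that evaluation using the Pearson correlation coefficient, with placeholder velocity values rather than the paper's data:

```python
# Sketch: Pearson correlation between reference and inverted blood-flow
# velocities. The velocity values below are illustrative placeholders.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

true_v     = [0.10, 0.25, 0.40, 0.32, 0.55]   # reference velocities (m/s)
inverted_v = [0.12, 0.24, 0.43, 0.30, 0.52]   # hypothetical CNN inversion output
print(f"r = {pearson_r(true_v, inverted_v):.4f}")
```

A value near 1.0 indicates the inversion preserves the velocity profile, which is the basis for the paper's claim of reduced information loss.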


Subject(s)
Humans , Angiography , Blood Flow Velocity , Constriction, Pathologic , Neural Networks, Computer
20.
Journal of Biomedical Engineering ; (6): 471-479, 2022.
Article in Chinese | WPRIM | ID: wpr-939614

ABSTRACT

The counting and recognition of white blood cells in blood smear images play an important role in the diagnosis of blood diseases, including leukemia. Traditional manual testing is easily disturbed by many factors, so an automatic leukocyte analysis system is needed to provide doctors with auxiliary diagnosis, and blood leukocyte segmentation is the basis of such automatic analysis. In this paper, we improved the U-Net model and proposed a leukocyte image segmentation algorithm based on dual-path networks and atrous spatial pyramid pooling. First, a dual-path network was introduced into the feature encoder to extract multi-scale leukocyte features, and atrous spatial pyramid pooling was used to enhance the network's feature extraction ability. A feature decoder composed of convolution and deconvolution layers then restored the segmented target to the original image size, realizing pixel-level segmentation of blood leukocytes. Finally, qualitative and quantitative experiments on three leukocyte data sets verified the effectiveness of the algorithm. Compared with other representative algorithms, the proposed blood leukocyte segmentation algorithm produced better results, with mIoU values above 0.97. The method is expected to aid the automatic auxiliary diagnosis of blood diseases.
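The mIoU (mean intersection-over-union) value reported here averages the per-class IoU over all segmentation classes. A minimal sketch of the metric on per-pixel class labels, with tiny illustrative masks rather than the study's data:

```python
# Sketch: mean intersection-over-union (mIoU) over segmentation classes.
# Labels are per-pixel class ids on flattened masks; values are illustrative.

def miou(pred, truth, num_classes):
    """Average IoU over classes that appear in prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        if union:                      # skip classes absent from both masks
            ious.append(inter / union)
    return sum(ious) / len(ious)

pred  = [0, 0, 1, 1, 1, 0, 0, 1]   # 0 = background, 1 = leukocyte
truth = [0, 0, 1, 1, 0, 0, 0, 1]
print(f"mIoU = {miou(pred, truth, 2):.3f}")
```

Because mIoU averages over classes, a small leukocyte region mis-segmented against a large background drags the score down, making it a stricter measure than pixel accuracy.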


Subject(s)
Algorithms , Leukocytes